Search for: All records

Creators/Authors contains: "Watson, Benjamin"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

  1. Since their creation, displays have used the top-to-bottom raster scan. In today's interactive applications, this scan is a liability, forcing users to choose between complete frames with synchronization delay, or "torn" frames without that delay. We propose a stochastic scan that enables low-latency, unsynchronized display without tearing. We also discuss an interactive display simulator that allows us to investigate the effects of stochastic and other scans on interaction and imagery.
    Free, publicly-accessible full text available May 5, 2026
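The core idea in the first abstract can be sketched in a few lines: replace the fixed top-to-bottom refresh order with a pseudorandom one, so an unsynchronized buffer swap interleaves old and new frames as fine-grained noise rather than a single coherent tear line. This is a minimal illustrative sketch, not the paper's actual scan construction; the function names and the use of a per-refresh shuffle are assumptions.

```python
import random

def raster_scan_order(width, height):
    # Classic top-to-bottom, left-to-right refresh order.
    return [(x, y) for y in range(height) for x in range(width)]

def stochastic_scan_order(width, height, seed=0):
    # Refresh the same pixels in a pseudorandom order. A mid-refresh
    # buffer swap then scatters old- and new-frame pixels across the
    # screen instead of splitting it at one visible scanline.
    order = raster_scan_order(width, height)
    random.Random(seed).shuffle(order)
    return order

# Both scans visit every pixel exactly once per refresh; only the
# visiting order differs.
order = stochastic_scan_order(4, 4)
assert set(order) == set(raster_scan_order(4, 4))
```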
  2. The latest light-field displays have improved greatly, but continue to be based on the approximate pinhole model. For every frame, our real-time technique evaluates a full optical model, then renders an image predistorted at the sub-pixel level to match the current pixel-to-eye light flow, reducing cross-talk and increasing viewing angle.
    Free, publicly-accessible full text available May 5, 2026
  3. Displays are being used for increasingly interactive applications including gaming, video conferencing, and perhaps most demanding, esports. We review the display needs of esports, and describe how current displays fail to meet them by using a high-latency I/O pipeline. We conclude with research directions that move away from this pipeline and better meet interactive user needs. 
    Free, publicly-accessible full text available December 5, 2025
  4. Rendering for light field displays (LFDs) requires rendering of dozens or hundreds of views, which must then be combined into a single image on the display, making real-time LFD rendering extremely difficult. We introduce light field display point rendering (LFDPR), which meets these challenges by improving eye-based point rendering [Gavane and Watson 2023] with texture-based splatting, which avoids oversampling of triangles mapped to only a few texels; and with LFD-biased sampling, which adjusts horizontal and vertical triangle sampling to match the sampling of the LFD itself. To improve image quality, we introduce multiview mipmapping, which reduces texture aliasing even though compute shaders do not support hardware mipmapping. We also introduce angular supersampling and reconstruction to combat LFD view aliasing and crosstalk. The resulting LFDPR is 2-8× faster than multiview rendering, with comparable quality.
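One piece of the fourth abstract is easy to make concrete: when compute shaders lack hardware mipmapping, the mip level must be chosen manually from texture-coordinate derivatives, typically as the log of the sample's texel-space footprint. The sketch below follows the standard GPU LOD formula; the parameter names and clamping are illustrative assumptions, not the paper's exact multiview-mipmapping scheme.

```python
import math

def manual_mip_lod(du_dx, dv_dx, du_dy, dv_dy, tex_w, tex_h, max_lod):
    # Footprint of one screen-space sample in texel space, per axis.
    fx = math.hypot(du_dx * tex_w, dv_dx * tex_h)
    fy = math.hypot(du_dy * tex_w, dv_dy * tex_h)
    # Standard LOD: log2 of the larger footprint, clamped to the chain.
    rho = max(fx, fy, 1e-8)
    return min(max(math.log2(rho), 0.0), max_lod)

# One sample per texel -> base level 0; a 4-texel footprint -> level 2.
assert manual_mip_lod(1/256, 0.0, 0.0, 1/256, 256, 256, 8) == 0.0
assert manual_mip_lod(4/256, 0.0, 0.0, 4/256, 256, 256, 8) == 2.0
```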
  5. Current light‐field displays increase resolution and reduce cross‐talk with head tracking, despite using simple lens models. With a more complete model, our real‐time technique uses GPUs to analyze the current frame's light flow at subpixel precision, and to render a matching image that further improves resolution and cross‐talk. 
  6. Video conferencing has become a central part of our daily lives, thanks to the COVID-19 pandemic. Unfortunately, so have its many limitations, resulting in poor support for communicative and social behavior and ultimately, “zoom fatigue.” New technologies will be required to address these limitations, including many drawn from mixed reality (XR). In this paper, our goals are to equip and encourage future researchers to develop and test such technologies. Toward this end, we first survey research on the shortcomings of video conferencing systems, as defined before and after the pandemic. We then consider the methods that research uses to evaluate support for communicative behavior, and argue that those same methods should be employed in identifying, improving, and validating promising video conferencing technologies. Next, we survey emerging XR solutions to video conferencing's limitations, most of which do not employ head-mounted displays. We conclude by identifying several opportunities for video conferencing research in a post-pandemic, hybrid working environment. 
  7. Eye-based point rendering (EPR) can make multiview effects much more practical by adding eye (camera) buffer resolution efficiencies to improved view-independent rendering (iVIR). We demonstrate this very successfully by applying EPR to dynamic cube-mapped reflections, sometimes achieving nearly 7× speedups over iVIR and traditional multiview rendering (MVR), with nearly equivalent quality. Our application to omnidirectional soft shadows is less successful, demonstrating that EPR is most effective with larger shader loads and tight eye buffer to off-screen (render target) buffer mappings. This is due to EPR's eye buffer resolution constraints limiting points and shading calculations to the sampling rate of the eye's viewport. In a 2.48 million triangle scene with 50 reflective objects (using 300 off-screen views), EPR renders environment maps with a 49.40 ms average frame time on an NVIDIA 1080 Ti GPU. In doing so, EPR generates up to 5× fewer points than iVIR, and regularly performs 50× fewer shading calculations than MVR.
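The source of EPR's savings described above, shading work bounded by the eye viewport rather than by the number of off-screen views, can be shown with simple arithmetic. This is a deliberate simplification of the full pipeline, and the resolutions below are illustrative assumptions, not the paper's benchmark configuration.

```python
def mvr_shading_samples(view_w, view_h, n_views):
    # Multiview rendering shades every pixel of every off-screen view.
    return view_w * view_h * n_views

def epr_shading_samples(eye_w, eye_h):
    # EPR limits points and shading to the sampling rate of the eye's
    # viewport, independent of the off-screen view count.
    return eye_w * eye_h

# Illustrative: 300 off-screen views at 256x256 vs. a 1920x1080 eye buffer.
mvr = mvr_shading_samples(256, 256, 300)   # 19,660,800 samples
epr = epr_shading_samples(1920, 1080)      #  2,073,600 samples
assert mvr > 9 * epr
```

Note the asymmetry the abstract reports: with a light shader load or a loose eye-to-off-screen mapping (as in the soft-shadow case), this bound buys less.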
  8. Emerging display usage scenarios require head tracking both at short (< 1m) and modest (< 3m) ranges. Yet it is difficult to find low-cost, unobtrusive tracking solutions that remain accurate across this range. By combining multiple head tracking solutions, we can mitigate the weaknesses of one solution with the strengths of another and improve head tracking overall. We built such a combination of two widely available and low-cost trackers, a Tobii Eye Tracker and a Kinect. The resulting system is more effective than the Kinect at short range, and than the Tobii at longer range. In this paper, we discuss how we accomplish this sensor fusion and compare our combined system to an existing mechanical tracker to evaluate its accuracy across its combined range.
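The fusion idea in the last abstract, letting each tracker dominate in the range where it is strong, can be sketched as a distance-weighted blend. The linear blend, the 1 m / 3 m bounds, and the function names are all illustrative assumptions; the paper's actual fusion method may differ.

```python
def fuse_head_position(near_est, far_est, distance_m, near=1.0, far=3.0):
    # near_est: head position from the short-range tracker (Tobii-style).
    # far_est:  head position from the depth camera (Kinect-style).
    # Blend weight moves from the near tracker to the far tracker as the
    # head recedes, clamped outside the [near, far] transition band.
    t = min(max((distance_m - near) / (far - near), 0.0), 1.0)
    return tuple(a * (1 - t) + b * t for a, b in zip(near_est, far_est))

# Up close the short-range tracker wins outright; at range, the depth
# camera; in between, a smooth hand-off.
assert fuse_head_position((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5) == (0.0, 0.0, 0.0)
assert fuse_head_position((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 3.5) == (1.0, 1.0, 1.0)
```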